1.
Journal of Biomedical Engineering ; (6): 286-294, 2023.
Article in Chinese | WPRIM | ID: wpr-981541

ABSTRACT

Existing automatic sleep staging algorithms suffer from large numbers of model parameters and long training times, which reduce staging efficiency. Using a single-channel electroencephalogram (EEG) signal, this paper proposed an automatic sleep staging algorithm based on a stochastic-depth residual network with transfer learning (TL-SDResNet50). First, 30 single-channel (Fpz-Cz) EEG recordings from 16 individuals were selected; after the effective sleep segments were preserved, the raw EEG signals were pre-processed with a Butterworth filter and the continuous wavelet transform to obtain two-dimensional images containing joint time-frequency features, which served as the input to the staging model. Then, a ResNet50 model pre-trained on a public dataset, the sleep database extension stored in European Data Format (Sleep-EDFx), was constructed, applying a stochastic-depth strategy and modifying the output layer to optimize the model structure. Finally, transfer learning was applied to whole-night human sleep data. Across several experiments, the algorithm achieved a staging accuracy of 87.95%. The experiments show that TL-SDResNet50 can be trained quickly on a small amount of EEG data and outperforms other recent staging algorithms as well as classical algorithms, which gives it practical value.
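The Butterworth pre-processing step described above can be sketched as follows; the sampling rate, band edges, filter order, and surrogate signal are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_eeg(x, fs, low=0.5, high=35.0, order=4):
    """Zero-phase Butterworth band-pass, a common EEG pre-processing step."""
    b, a = butter(order, [low, high], btype="band", fs=fs)
    return filtfilt(b, a, x)

fs = 250.0                      # assumed sampling rate
t = np.arange(0, 30, 1 / fs)    # one 30-second epoch
# surrogate EEG: 10 Hz alpha activity plus slow drift and 50 Hz mains interference
x = (np.sin(2 * np.pi * 10 * t)
     + 2 * np.sin(2 * np.pi * 0.1 * t)
     + 0.5 * np.sin(2 * np.pi * 50 * t))
y = bandpass_eeg(x, fs)         # drift and mains are attenuated, alpha is kept
```

The filtered epoch would then be fed to the continuous wavelet transform to build the two-dimensional time-frequency image.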


Subject(s)
Humans , Sleep Stages , Algorithms , Sleep , Wavelet Analysis , Electroencephalography/methods , Machine Learning
2.
Chinese Journal of Medical Instrumentation ; (6): 248-253, 2022.
Article in Chinese | WPRIM | ID: wpr-928898

ABSTRACT

To address real-time detection and removal of EEG noise in anesthesia depth monitoring, we proposed an adaptive EEG noise detection and removal method. The method uses the discrete wavelet transform to extract the low-frequency and high-frequency energy of a segment of EEG signal, and sets two sets of thresholds, one for the low-frequency band and one for the high-frequency band. Both sets of thresholds are updated adaptively according to the energy of the most recent EEG signal. Finally, the level of interference is judged from the ranges of the low-frequency and high-frequency energies, and the corresponding denoising is applied. The results show that the method detects and removes noise interference in the EEG signal more accurately and improves the stability of the calculated characteristic parameters.
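A minimal sketch of the band-energy idea, with a hand-rolled one-level Haar split standing in for the discrete wavelet transform and an assumed exponential update rule for the adaptive threshold (the abstract does not specify the exact update):

```python
import numpy as np

def haar_band_energies(x):
    """One-level Haar DWT: approximation (low band) and detail (high band) energies."""
    x = x[: len(x) // 2 * 2]
    a = (x[0::2] + x[1::2]) / np.sqrt(2)  # low-frequency approximation
    d = (x[0::2] - x[1::2]) / np.sqrt(2)  # high-frequency detail
    return np.sum(a**2), np.sum(d**2)

def adaptive_flag(energies, thresh, rate=0.1, k=3.0):
    """Flag a segment whose energy exceeds k times the running threshold;
    otherwise move the threshold toward the new energy (exponential average)."""
    flags = []
    for e in energies:
        flags.append(e > k * thresh)
        if not flags[-1]:
            thresh = (1 - rate) * thresh + rate * e
    return flags, thresh

low, high = haar_band_energies(np.ones(8))       # constant signal: no detail energy
flags, th = adaptive_flag([1.0, 1.1, 0.9, 10.0, 1.0], 1.0)  # burst in segment 4
```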


Subject(s)
Algorithms , Electroencephalography , Signal Processing, Computer-Assisted , Signal-To-Noise Ratio , Wavelet Analysis
3.
Journal of Biomedical Engineering ; (6): 293-300, 2022.
Article in Chinese | WPRIM | ID: wpr-928225

ABSTRACT

In recent years, epileptic seizure detection based on the electroencephalogram (EEG) has attracted widespread attention in the academic community. However, seizure data are difficult to collect, and overfitting easily occurs when training data are scarce. To address this problem, this paper took the CHB-MIT epilepsy EEG dataset from Boston Children's Hospital as the research object and applied the wavelet transform for data augmentation by setting different wavelet scale factors. In addition, by combining deep learning, ensemble learning, transfer learning and other methods, an epilepsy detection method with high accuracy for specific patients was proposed under the condition of insufficient training samples. In the tests, wavelet scale factors of 2, 4 and 8 were compared experimentally. With a scale factor of 8, the average accuracy, average sensitivity and average specificity were 95.47%, 93.89% and 96.48%, respectively. Comparative experiments against recent literature verified the advantages of the proposed method. Our results may provide a reference for the clinical application of epilepsy detection.
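One plausible reading of "augmentation by wavelet scale factor" is time dilation of the signal; the sketch below uses plain interpolation as a hypothetical stand-in, since the abstract does not give the exact construction:

```python
import numpy as np

def dilate(x, s):
    """Time-dilate a 1-D signal by scale factor s via linear interpolation
    (a simple stand-in for wavelet scale-factor augmentation)."""
    n = len(x)
    return np.interp(np.linspace(0, n - 1, int(n * s)), np.arange(n), x)

x = np.sin(np.linspace(0, 4 * np.pi, 256))       # surrogate EEG segment
augmented = [dilate(x, s) for s in (2, 4, 8)]    # the paper's scale factors
```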


Subject(s)
Child , Humans , Algorithms , Deep Learning , Electroencephalography , Epilepsy/diagnosis , Seizures/diagnosis , Signal Processing, Computer-Assisted , Wavelet Analysis
4.
J. bras. psiquiatr ; 70(3): 193-202, jul.-set. 2021. tab, graf
Article in English | LILACS | ID: biblio-1350953

ABSTRACT

OBJECTIVE: To use a wavelet technique to determine whether the number of suicides is similar between developed and emerging countries. METHODS: Annual data were obtained from World Health Organization (WHO) reports from 1986 to 2015. The discrete non-decimated wavelet transform was used for the analysis, and the Daubechies wavelet function was applied with five-level decomposition. For clustering, energy (variance) was used to analyze the clusters, and a dendrogram, built with the Mahalanobis distance, was used to visualize the clustering process. The number of groups was set with the NbClust function in R. RESULTS: The cluster analysis verified the formation of four groups: Japan, the United States and Brazil formed distinct, isolated groups, and the remaining countries (Austria, Belgium, Chile, Israel, Mexico, Italy and the Netherlands) constituted a single group. CONCLUSION: The methods used in this paper enabled a detailed verification of which countries behave similarly despite very distinct socioeconomic, geographic and climate characteristics.
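The clustering stage (energy features, Mahalanobis distance, hierarchical grouping) can be illustrated as below; the six "countries" and their five-level energies are invented, and a small ridge is added to keep the 6-sample covariance invertible:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Hypothetical five-level energy (variance) features for six series:
# the first three rows resemble one another, the last three another pattern.
features = np.array([
    [1.00, 0.90, 1.10, 1.05, 0.95],
    [1.05, 0.95, 1.00, 1.10, 0.90],
    [0.95, 1.00, 1.05, 0.95, 1.05],
    [9.00, 8.90, 9.10, 9.05, 8.95],
    [9.05, 8.95, 9.00, 9.10, 8.90],
    [8.95, 9.00, 9.05, 8.95, 9.05],
])
# Ridge-regularized inverse covariance for the Mahalanobis distance
VI = np.linalg.inv(np.cov(features.T) + 0.1 * np.eye(5))
Z = linkage(pdist(features, metric="mahalanobis", VI=VI), method="average")
labels = fcluster(Z, t=2, criterion="maxclust")  # cut the dendrogram at 2 groups
```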




Subject(s)
Humans , Male , Female , Adolescent , Adult , Aged , Suicide/psychology , Suicide/statistics & numerical data , Developed Countries , Developing Countries , Wavelet Analysis , Time Series Studies , Risk Factors , Mental Disorders/epidemiology
5.
Journal of Biomedical Engineering ; (6): 764-773, 2021.
Article in Chinese | WPRIM | ID: wpr-888237

ABSTRACT

The ambulatory electrocardiogram (ECG) collected by wearable devices is often corrupted by motion interference from human activity. The interference frequencies overlap those of the ECG signal, which distorts and deforms the ECG and affects the accuracy of heart rate detection. This paper proposed a heart rate detection method using a coarse-graining technique. First, the ECG signal was pre-processed to remove baseline drift and high-frequency interference. Second, motion-related high-amplitude interference exceeding a preset threshold was suppressed by signal compression. Third, the signal was coarse-grained by adaptive peak dilation and waveform reconstruction, and the heart rate was calculated from the frequency spectrum obtained by the fast Fourier transform. The method was compared with a wavelet-transform-based QRS feature extraction algorithm on ECG collected from 30 volunteers at rest and in different motion states. The results showed that the correlation coefficient between the calculated heart rate and the standard heart rate was 0.999, higher than that of the wavelet transform method.
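The final spectral step, heart rate from the dominant FFT peak, can be sketched as follows; the sampling rate, search band and surrogate signal are assumptions:

```python
import numpy as np

def heart_rate_fft(x, fs):
    """Estimate heart rate (beats/min) from the dominant spectral peak
    inside an assumed physiological band of 0.7-3.5 Hz (42-210 bpm)."""
    spec = np.abs(np.fft.rfft(x - x.mean()))
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    band = (freqs >= 0.7) & (freqs <= 3.5)
    return 60.0 * freqs[band][np.argmax(spec[band])]

fs = 250.0
t = np.arange(0, 30, 1 / fs)
# surrogate pulse signal: 1.2 Hz fundamental (72 bpm) with a weaker harmonic
x = np.cos(2 * np.pi * 1.2 * t) + 0.3 * np.cos(2 * np.pi * 2.4 * t)
hr = heart_rate_fft(x, fs)
```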


Subject(s)
Humans , Electrocardiography , Heart Rate , Signal Processing, Computer-Assisted , Wavelet Analysis , Wearable Electronic Devices
6.
Journal of Biomedical Engineering ; (6): 473-482, 2021.
Article in Chinese | WPRIM | ID: wpr-888203

ABSTRACT

Brain-computer interface (BCI) systems used in practical applications require as few electroencephalogram (EEG) acquisition channels as possible. However, when only one channel remains, it is difficult to remove electrooculogram (EOG) artifacts. This paper therefore proposed an EOG artifact removal algorithm based on the wavelet transform and ensemble empirical mode decomposition (EEMD). First, the single-channel EEG signal undergoes a wavelet transform, and the wavelet components containing EOG artifact are decomposed by EEMD. Then a predefined autocorrelation-coefficient threshold is used to automatically select and remove the intrinsic mode functions mainly composed of EOG components, and finally the 'clean' EEG signal is reconstructed. Comparative experiments on simulated and real data show that the proposed algorithm solves the problem of automatically removing EOG artifacts from single-channel EEG. It removes EOG artifacts effectively while causing less EEG distortion and keeping the algorithmic complexity low, which helps to move BCI technology out of the laboratory and toward commercial application.
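The autocorrelation-based selection of EOG-dominated components might look like this; the lag-1 statistic and the 0.97 cutoff are illustrative stand-ins for the paper's predefined autocorrelation threshold:

```python
import numpy as np

def lag1_autocorr(x):
    """Lag-1 autocorrelation: near 1 for slow EOG-like drifts, small for fast activity."""
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

# Hypothetical decomposition components: a slow EOG-like drift vs. fast oscillation
t = np.linspace(0, 2, 500)
slow = np.sin(2 * np.pi * 0.5 * t)   # EOG-like: lag-1 autocorrelation near 1
fast = np.sin(2 * np.pi * 60 * t)    # high-frequency neural-like component
# keep only components below the (assumed) threshold; the rest are treated as EOG
keep = [c for c in (slow, fast) if lag1_autocorr(c) < 0.97]
```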


Subject(s)
Algorithms , Artifacts , Computer Simulation , Electroencephalography , Signal Processing, Computer-Assisted , Wavelet Analysis
7.
Chinese Journal of Medical Instrumentation ; (6): 1-5, 2021.
Article in Chinese | WPRIM | ID: wpr-880412

ABSTRACT

The ECG signal is susceptible to interference from the external environment during acquisition, which affects subsequent analysis and processing. Signals processed with the traditional hard and soft threshold functions suffer from limited quality and poor continuity at the threshold. An improved threshold-function wavelet denoising method is proposed that offers better adjustability and continuity, effectively overcoming the shortcomings of the traditional soft and hard threshold functions. Matlab simulations on a large amount of data compared the various processing methods. The results show that the improved threshold function improves the denoising effect and is superior to traditional soft and hard threshold denoising.
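For comparison, one common form of "improved" threshold function is sketched below: continuous at the threshold like the soft rule, but approaching the hard rule for large coefficients. The exponential form and parameter `a` are assumptions, not necessarily the function proposed in the paper:

```python
import numpy as np

def soft(w, thr):
    """Soft thresholding: continuous, but shrinks large coefficients by thr."""
    return np.sign(w) * np.maximum(np.abs(w) - thr, 0.0)

def hard(w, thr):
    """Hard thresholding: unbiased for large coefficients, discontinuous at thr."""
    return np.where(np.abs(w) > thr, w, 0.0)

def improved(w, thr, a=2.0):
    """Continuous at |w| = thr (like soft), tends to hard for large |w|."""
    shrink = np.abs(w) - thr / np.exp(a * (np.abs(w) - thr))
    return np.where(np.abs(w) > thr, np.sign(w) * shrink, 0.0)

w = np.array([-3.0, -0.5, 0.5, 3.0])
out = improved(w, 1.0)   # lies between the soft and hard outputs
```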


Subject(s)
Algorithms , Computer Simulation , Electrocardiography , Signal Processing, Computer-Assisted , Wavelet Analysis
8.
Journal of Biomedical Engineering ; (6): 1181-1192, 2021.
Article in Chinese | WPRIM | ID: wpr-921860

ABSTRACT

Detection of the characteristic waves of the electrocardiogram (ECG) is the basis of cardiovascular disease analysis and heart rate variability analysis. To address the low detection accuracy and poor real-time performance for ECG signals recorded during motion, this paper proposes a detection algorithm based on segment energy and the stationary wavelet transform (SWT). First, the energy of the ECG signal is calculated segment by segment, and energy candidate peaks obtained after moving averaging are used to detect the QRS complex. Second, the QRS amplitude is set to zero and the fifth SWT component is used to locate the P wave and T wave. The experimental results show that, compared with other algorithms, the proposed algorithm detects the QRS complex with high accuracy in different motion states. It takes only 0.22 s to detect the QRS complexes of a 30-minute ECG record, a clear improvement in real-time performance. On the basis of QRS detection, the accuracy of P wave and T wave detection exceeds 95%. The results show that this method can improve the efficiency of ECG signal detection and provides a new approach for real-time ECG classification and cardiovascular disease diagnosis.
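The segment-energy stage of QRS detection can be sketched roughly as follows; the window lengths, threshold factor, refractory period and surrogate ECG are all assumptions:

```python
import numpy as np

def detect_qrs(ecg, fs, win=0.05, smooth=0.15):
    """Segment-energy QRS sketch: short-window energy, moving average,
    then peak picking above a fixed fraction of the maximum."""
    w = max(1, int(win * fs))
    energy = np.convolve(ecg**2, np.ones(w) / w, mode="same")
    s = max(1, int(smooth * fs))
    ma = np.convolve(energy, np.ones(s) / s, mode="same")
    thr = 0.5 * ma.max()
    refractory = int(0.25 * fs)          # ignore peaks closer than 250 ms
    peaks = []
    for i in range(1, len(ma) - 1):
        if ma[i] > thr and ma[i] >= ma[i - 1] and ma[i] > ma[i + 1]:
            if not peaks or i - peaks[-1] > refractory:
                peaks.append(i)
    return np.array(peaks)

fs = 250
t = np.arange(0, 10, 1 / fs)
ecg = np.zeros_like(t)
beat_times = np.arange(0.5, 10, 1.0)     # surrogate 60-bpm rhythm
for bt in beat_times:
    ecg += np.exp(-((t - bt) ** 2) / (2 * 0.01**2))  # narrow Gaussian "R wave"
peaks = detect_qrs(ecg, fs)
```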


Subject(s)
Humans , Algorithms , Arrhythmias, Cardiac , Electrocardiography , Heart Rate , Signal Processing, Computer-Assisted , Wavelet Analysis
9.
Journal of Biomedical Engineering ; (6): 1035-1042, 2021.
Article in Chinese | WPRIM | ID: wpr-921843

ABSTRACT

Distinguishing seizure from non-seizure periods is very important for epilepsy treatment. In this study, an automatic seizure detection algorithm based on the dual-density dual-tree complex wavelet transform (DD-DT CWT) for intracranial electroencephalogram (iEEG) was proposed. The experimental data came from the competition dataset set up by the National Institutes of Health (NINDS) on Kaggle; the processed database consisted of 55,023 seizure epochs and 501,990 non-seizure epochs, each 1 second long and containing 174 sampling points. The signal was first resampled and then processed with the DD-DT CWT. Four kinds of features, wavelet entropy, variance, energy and mean value, were extracted and sent to a least-squares support vector machine (LS-SVM) for learning and classification. The appropriate decomposition level was selected by comparing the results under different wavelet decomposition levels. The experiments showed that the selected features differed between seizure and non-seizure epochs. Across the eight patients, the average accuracy of three-level decomposition classification was 91.98%, the sensitivity was 90.15%, and the specificity was 93.81%. This work shows that the algorithm performs excellently in the binary classification of epileptic EEG signals and can detect seizure periods automatically and efficiently.
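The feature-extraction stage (wavelet entropy, variance, energy, mean per band) can be illustrated with an ordinary multi-level Haar DWT standing in for the DD-DT CWT:

```python
import numpy as np

def haar_dwt(x, level):
    """Multi-level Haar DWT (a simple stand-in for the paper's DD-DT CWT)."""
    coeffs, a = [], x.astype(float)
    for _ in range(level):
        a = a[: len(a) // 2 * 2]
        d = (a[0::2] - a[1::2]) / np.sqrt(2)   # detail band
        a = (a[0::2] + a[1::2]) / np.sqrt(2)   # approximation band
        coeffs.append(d)
    coeffs.append(a)
    return coeffs

def band_features(c):
    """Per-band features named in the abstract: entropy, variance, energy, mean."""
    e = c**2
    p = e / (e.sum() + 1e-12)
    entropy = -(p * np.log(p + 1e-12)).sum()
    return entropy, c.var(), e.sum(), c.mean()

x = np.sin(np.linspace(0, 20 * np.pi, 174))    # one 1-second, 174-sample epoch
features = [band_features(c) for c in haar_dwt(x, 3)]  # 3 detail bands + 1 approx
```

These per-band feature vectors are what would be fed to the LS-SVM classifier.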


Subject(s)
Humans , Algorithms , Electroencephalography , Epilepsy/diagnosis , Seizures/diagnosis , Signal Processing, Computer-Assisted , Support Vector Machine , Wavelet Analysis
10.
Journal of Biomedical Engineering ; (6): 838-847, 2021.
Article in Chinese | WPRIM | ID: wpr-921821

ABSTRACT

General anesthesia is an essential part of surgery to ensure patient safety. The electroencephalogram (EEG), with its abundant information and ability to reflect brain activity, has been widely used in anesthesia depth monitoring. This paper proposes a method combining the wavelet transform and an artificial neural network (ANN) to assess the depth of anesthesia. The discrete wavelet transform was used to decompose the EEG signal, and the approximation and detail coefficients were used to calculate 9 characteristic parameters. A Kruskal-Wallis test showed that these parameters differed significantly across the four levels of anesthesia: awake, light anesthesia, moderate anesthesia and deep anesthesia.
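The Kruskal-Wallis screening of characteristic parameters can be reproduced in miniature; the feature values for the four anesthesia levels below are hypothetical:

```python
import numpy as np
from scipy.stats import kruskal

# Hypothetical values of one wavelet-derived parameter at four anesthesia levels
rng = np.random.default_rng(1)
awake = rng.normal(0.0, 1.0, 30)
light = rng.normal(1.0, 1.0, 30)
moderate = rng.normal(2.0, 1.0, 30)
deep = rng.normal(3.0, 1.0, 30)

# Non-parametric test for a difference in distribution across the four groups
stat, p = kruskal(awake, light, moderate, deep)
```

A small p-value is what would justify keeping the parameter as an input to the ANN.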


Subject(s)
Humans , Algorithms , Anesthesia, General , Electroencephalography , Neural Networks, Computer , Wavelet Analysis
11.
Rev. cuba. inform. méd ; 12(2): e394, tab, graf
Article in Spanish | CUMED, LILACS | ID: biblio-1144459

ABSTRACT



In radiology, various imaging techniques are used to diagnose disease and assist surgical interventions, with the aim of determining the exact location and extent of a brain tumor. Techniques such as Positron Emission Tomography and Magnetic Resonance Imaging can determine the malignant or benign nature of a brain tumor and study brain structures with high-resolution neuroimaging. Researchers internationally have used different techniques to fuse Positron Emission Tomography and Magnetic Resonance images, allowing physiological characteristics to be observed in correlation with anatomical structures. The present research aims to develop a process for fusing Positron Emission Tomography and Magnetic Resonance neuroimages. Five activities were defined in the process, along with the algorithms to be used in each, which made it possible to identify the most efficient ones and increase the quality of the fusion. The result is a neuroimage fusion process based on a hybrid Wavelet and Curvelet scheme that yields high-quality fused images(AU)
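A minimal wavelet-fusion sketch in the spirit described above: one-level Haar decomposition, averaging of the approximation coefficients, and a maximum-magnitude rule for the detail coefficients. The paper's hybrid Wavelet-Curvelet scheme is more elaborate; this only illustrates the basic mechanism:

```python
import numpy as np

def haar2(img):
    """One-level 2-D Haar transform of an even-sized image."""
    a = (img[0::2] + img[1::2]) / 2
    d = (img[0::2] - img[1::2]) / 2
    ll, lh = (a[:, 0::2] + a[:, 1::2]) / 2, (a[:, 0::2] - a[:, 1::2]) / 2
    hl, hh = (d[:, 0::2] + d[:, 1::2]) / 2, (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2."""
    h, w = ll.shape
    a, d = np.zeros((h, 2 * w)), np.zeros((h, 2 * w))
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    img = np.zeros((2 * h, 2 * w))
    img[0::2], img[1::2] = a + d, a - d
    return img

def fuse(A, B):
    """Average approximations, keep the larger-magnitude detail coefficient."""
    cA, cB = haar2(A), haar2(B)
    ll = (cA[0] + cB[0]) / 2
    det = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(cA[1:], cB[1:])]
    return ihaar2(ll, *det)

rng = np.random.default_rng(6)
A, B = rng.random((8, 8)), rng.random((8, 8))  # surrogate PET / MRI patches
F = fuse(A, B)
```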


Subject(s)
Humans , Male , Female , Algorithms , Magnetic Resonance Imaging/methods , Positron-Emission Tomography/methods , Wavelet Analysis , Neuroimaging/methods , Cerebral Ventricle Neoplasms/diagnostic imaging
12.
Rev. cuba. invest. bioméd ; 39(3): e500, jul.-set. 2020. tab, graf
Article in Spanish | CUMED, LILACS | ID: biblio-1138929

ABSTRACT



Introduction: The wavelet-transform-based multilead electrocardiographic (ECG) signal delineator has high spatial resolution and eliminates the inter-lead differences that traditionally appear in single-lead methods. However, it requires mutually orthogonal ECG leads to obtain a spatial loop. Objective: To develop orthogonalization methods for two or three ECG leads that allow the wavelet-based multilead delineator to be generalized to any ECG database with more than one lead. Methods: Three lead-orthogonalization methods were implemented: two-lead orthogonalization based on vector projection, orthogonalization based on principal components, and orthogonalization based on the classic Gram-Schmidt method. Results: The performance of the multilead ECG delineator with each orthogonalization method was compared by computing the arithmetic mean and standard deviation over different combinations of leads from both databases for each of the marks analyzed. The best results were obtained with the principal component analysis method and the worst with the two-lead orthogonalization method. Conclusions: The orthogonalization algorithms with the best results were those based on three orthogonal leads, among which principal component decomposition was slightly superior; it is therefore considered the most appropriate method for generalizing the multilead delineator(AU)
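The classic Gram-Schmidt variant can be sketched directly; the three surrogate "leads" below are random signals standing in for real ECG leads:

```python
import numpy as np

def gram_schmidt(leads):
    """Classic Gram-Schmidt: orthonormalize a set of leads given as rows."""
    ortho = []
    for v in leads:
        # subtract the projections onto the leads already orthonormalized
        w = v - sum(np.dot(v, u) * u for u in ortho)
        n = np.linalg.norm(w)
        if n > 1e-12:          # skip linearly dependent leads
            ortho.append(w / n)
    return np.array(ortho)

rng = np.random.default_rng(2)
leads = rng.normal(size=(3, 1000))   # three surrogate ECG leads
Q = gram_schmidt(leads)              # rows are now orthonormal
```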


Subject(s)
Humans , Male , Female , Algorithms , Principal Component Analysis/methods , Electrocardiography/methods , Wavelet Analysis
13.
Rev. cuba. inform. méd ; 12(1)ene.-jun. 2020. tab, graf
Article in Spanish | CUMED, LILACS | ID: biblio-1126554

ABSTRACT



Techniques such as Positron Emission Tomography and Computed Tomography make it possible, respectively, to determine the malignant or benign nature of a tumor and to study the anatomical structures of the body with high-resolution images. Researchers internationally have used different techniques to fuse Positron Emission Tomography and Computed Tomography because the fusion allows metabolic functions to be observed in correlation with anatomical structures. The present investigation analyzes and selects algorithms that favor neuroimage fusion, based on their precision, thereby contributing to the development of fusion software without the need to purchase expensive high-performance imaging equipment. Documentary analysis, logical-historical and inductive-deductive methods were applied in the study. The best algorithm variants and techniques for fusion were analyzed and identified according to the reported literature. From this analysis, a Wavelet-based fusion scheme was identified as the best variant for image fusion, with bicubic interpolation proposed for co-registration and the Haar wavelet as the discrete wavelet transform. In addition, the research led to the development of a fusion scheme based on these techniques. The analysis confirmed the applications and usefulness of fusion techniques as a substitute for the high cost of acquiring PET/CT multifunction scanners for Cuba(AU)
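The proposed bicubic co-registration step can be approximated with an order-3 spline resampling of the lower-resolution modality onto the other grid; the image sizes here are assumptions:

```python
import numpy as np
from scipy.ndimage import zoom

# Resample a low-resolution PET-like image onto a CT-sized grid using an
# order-3 spline (a bicubic-style interpolation), prior to fusion.
pet = np.random.default_rng(3).random((64, 64))   # surrogate PET slice
ct_shape = (256, 256)                             # assumed CT grid
pet_on_ct = zoom(pet,
                 (ct_shape[0] / pet.shape[0], ct_shape[1] / pet.shape[1]),
                 order=3)
```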


Subject(s)
Humans , Male , Female , Image Processing, Computer-Assisted/methods , Software/standards , Tomography, X-Ray Computed/methods , Positron-Emission Tomography/methods , Wavelet Analysis , Cuba
14.
Journal of Biomedical Engineering ; (6): 271-279, 2020.
Article in Chinese | WPRIM | ID: wpr-828170

ABSTRACT

Spikes recorded by multi-channel microelectrode arrays are very weak and susceptible to interference, and this noise affects the accuracy of spike detection. Targeting the independent white noise, correlated noise and colored noise present in spike detection, and combining principal component analysis (PCA), wavelet analysis and adaptive time-frequency analysis, a new denoising method (PCWE) is proposed that combines PCA-wavelet (PCAW) denoising with ensemble empirical mode decomposition (EEMD). First, the principal component is extracted and removed as correlated noise using PCA. Then the wavelet-threshold method removes the independent white noise. Finally, EEMD decomposes the noise into intrinsic mode functions at each level and removes the colored noise. Simulation results showed that PCWE increased the signal-to-noise ratio by about 2.67 dB and decreased the standard deviation by about 0.4 μV, clearly improving the accuracy of spike detection. On measured data, PCWE increased the signal-to-noise ratio by about 1.33 dB and reduced the standard deviation by about 18.33 μV, demonstrating good denoising performance. These results suggest that PCWE can improve the reliability of spike signals and provides an accurate and effective new denoising method for the encoding and decoding of neural signals.
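The PCA step of PCWE, removing the leading principal component as correlated noise shared across channels, can be sketched as follows; the wavelet-threshold and EEMD steps are omitted, and the three-channel mixture is synthetic:

```python
import numpy as np

def remove_first_pc(X):
    """Subtract the first principal component of a channels-by-samples matrix,
    treating it as correlated noise shared across channels (PCA step of PCWE)."""
    Xc = X - X.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    pc1 = np.outer(U[:, 0] * s[0], Vt[0])   # rank-1 correlated-noise estimate
    return Xc - pc1

rng = np.random.default_rng(4)
common = np.sin(np.linspace(0, 8 * np.pi, 500))   # noise shared by all channels
X = np.vstack([common * g + 0.1 * rng.normal(size=500) for g in (1.0, 0.8, 1.2)])
cleaned = remove_first_pc(X)   # the shared sinusoid is largely removed
```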


Subject(s)
Algorithms , Microelectrodes , Principal Component Analysis , Reproducibility of Results , Signal Processing, Computer-Assisted , Signal-To-Noise Ratio , Wavelet Analysis
15.
Rev. Soc. Bras. Med. Trop ; 53: e20190470, 2020. tab, graf
Article in English | SES-SP, ColecionaSUS, LILACS | ID: biblio-1136864

ABSTRACT

INTRODUCTION: Tuberculosis is among the top 10 causes of death worldwide. The resistant strains causing the disease are considered a public health emergency and a health security threat. According to the World Health Organization (WHO), around 558,000 cases with resistance to rifampicin (the most effective first-line drug) have been estimated to date. Therefore, to detect resistant strains using the genomes of Mycobacterium tuberculosis (MTB), we propose a new methodology for analyzing genomic similarities that combines different levels of genome decomposition (discrete non-decimated wavelet transform) with the Hurst exponent. METHODS: The signals corresponding to the ten analyzed sequences were obtained from the GC content; these signals were decomposed using the discrete non-decimated wavelet transform with the Daubechies wavelet with four null moments at five levels of decomposition. The Hurst exponent was calculated at each decomposition level using five different methods, and cluster analysis was performed on the results. RESULTS: The aggregated variance, differenced aggregated variance, and aggregated absolute value methods produced three groups, whereas the Peng and R/S methods produced two groups. The aggregated variance method gave the best results with respect to grouping similar strains. CONCLUSION: The Hurst exponent combined with the discrete non-decimated wavelet transform can be used as a measure of similarity between genome sequences, leading to a refinement of the analysis.
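The aggregated-variance Hurst estimator used in the comparison can be sketched as below; it is applied here to white noise (expected H close to 0.5) rather than to a GC-content signal, and the block sizes are assumptions:

```python
import numpy as np

def hurst_aggvar(x, block_sizes=(4, 8, 16, 32, 64)):
    """Aggregated-variance Hurst estimate: the variance of block means
    scales as m^(2H-2), so H comes from the slope of a log-log fit."""
    logs_m, logs_v = [], []
    for m in block_sizes:
        n = len(x) // m
        means = x[: n * m].reshape(n, m).mean(axis=1)
        logs_m.append(np.log(m))
        logs_v.append(np.log(means.var()))
    slope = np.polyfit(logs_m, logs_v, 1)[0]
    return 1.0 + slope / 2.0

rng = np.random.default_rng(5)
white = rng.normal(size=4096)   # uncorrelated series: H should be near 0.5
H = hurst_aggvar(white)
```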


Subject(s)
Humans , Genome, Bacterial/genetics , Wavelet Analysis , Models, Genetic , Mycobacterium tuberculosis/genetics
16.
Journal of Biomedical Engineering ; (6): 775-785, 2020.
Article in Chinese | WPRIM | ID: wpr-879204

ABSTRACT

Denoising methods based on wavelet analysis and empirical mode decomposition cannot fundamentally track and eliminate noise, and usually distort heart sounds. To address this problem, a heart sound denoising method based on improved minima-controlled recursive averaging and the optimally modified log-spectral amplitude estimator is proposed in this paper. The method uses a short-time window to smoothly and dynamically track and estimate the noise minimum. The noise estimate is used to obtain an optimal spectral gain function that minimizes the difference between the clean heart sound and its estimate. In addition, combining subjective spectral analysis with an objective analysis of the contribution to a normal/abnormal heart sound classification system, we propose a more rigorous evaluation mechanism. The experimental results show that the proposed method effectively improves the time-frequency features and obtains higher scores in the normal and abnormal heart sound classification systems. The method can help medical workers improve diagnostic accuracy and is also of great reference value for building and applying computer-aided diagnosis systems.
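The minimum-tracking idea behind the noise estimator can be caricatured with a running short-window minimum and a Wiener-style gain; the window length and gain floor are assumptions, and this is far simpler than the improved estimator in the paper:

```python
import numpy as np

def track_noise_floor(power, win=50):
    """Running minimum over a short window as a crude noise-power tracker
    (the spirit of minima-controlled recursive averaging)."""
    est = np.empty_like(power)
    for i in range(len(power)):
        est[i] = power[max(0, i - win + 1): i + 1].min()
    return est

def spectral_gain(power, noise, floor=0.05):
    """Wiener-style gain from the estimated SNR, lower-bounded by a floor
    to avoid musical-noise artifacts."""
    snr = np.maximum(power / (noise + 1e-12) - 1.0, 0.0)
    return np.maximum(snr / (snr + 1.0), floor)

# Surrogate frame powers: constant noise with two heart-sound bursts
power = np.full(200, 1.0)
power[50:60] = 10.0
power[120:130] = 8.0
noise = track_noise_floor(power)     # stays at the quiet-frame level
gain = spectral_gain(power, noise)   # near 1 in bursts, at the floor elsewhere
```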


Subject(s)
Humans , Algorithms , Heart Sounds , Signal Processing, Computer-Assisted , Signal-To-Noise Ratio , Wavelet Analysis
17.
J. pediatr. (Rio J.) ; 95(6): 674-681, Nov.-Dec. 2019. graf
Article in English | LILACS | ID: biblio-1056656

ABSTRACT

Objective: To develop and validate a computational tool to assist radiological decisions on necrotizing enterocolitis. Methodology: Patients who exhibited clinical signs and radiographic evidence of Bell's stage 2 or higher were included in the study, yielding 64 exams. The tool classified localized bowel wall thickening and intestinal pneumatosis (IP) using full-width at half-maximum (FWHM) measurements and texture analyses based on wavelet energy decomposition. Radiological findings of suspicious bowel wall thickening and IP loops were confirmed by both surgery and histopathological analysis. Two experienced radiologists selected an involved bowel and a normal bowel in the same radiograph. The FWHM and wavelet-based texture features were then calculated and compared using the Mann-Whitney U test, and specificity, sensitivity, and positive and negative predictive values were calculated. Results: The FWHM results differed significantly between normal and distended loops (medians of 10.30 and 15.13, respectively). Horizontal, vertical, and diagonal wavelet energy measurements were evaluated at eight levels of decomposition; levels 7 and 8 in the horizontal direction presented significant differences. For level 7, the medians were 0.034 and 0.088 for the normal and IP groups, respectively, and for level 8 they were 0.19 and 0.34. Conclusions: The developed tool could detect differences in radiographic findings of bowel wall thickening and IP that are difficult to diagnose, demonstrating its potential in clinical routine. It may help physicians investigate suspicious bowel loops, thereby considerably improving diagnosis and clinical decisions.
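The FWHM measurement on a bowel-wall intensity profile can be sketched as below, tested on a synthetic Gaussian profile whose FWHM is known analytically (2σ√(2 ln 2)):

```python
import numpy as np

def fwhm(profile):
    """Full width at half maximum of a 1-D intensity profile, with linear
    interpolation at the two half-maximum crossings. Returns samples."""
    p = profile - profile.min()
    half = p.max() / 2.0
    above = np.where(p >= half)[0]
    left, right = above[0], above[-1]

    def cross(i, j):
        # linear interpolation of the half-maximum crossing between i and j
        return i + (half - p[i]) / (p[j] - p[i]) * (j - i)

    lo = cross(left - 1, left) if left > 0 else float(left)
    hi = cross(right + 1, right) if right < len(p) - 1 else float(right)
    return hi - lo

x = np.linspace(-10, 10, 2001)
profile = np.exp(-x**2 / (2 * 2.0**2))    # Gaussian profile, sigma = 2
width = fwhm(profile) * (x[1] - x[0])     # convert samples to x units
```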




Subject(s)
Humans , Infant, Newborn , Enterocolitis, Necrotizing/diagnostic imaging , Infant, Newborn, Diseases/diagnostic imaging , Severity of Illness Index , Image Processing, Computer-Assisted , Software Validation , Radiography, Abdominal , Retrospective Studies , Sensitivity and Specificity , Statistics, Nonparametric , Wavelet Analysis , Intestines/physiopathology
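The full-width at half-maximum (FWHM) measurement that the tool above applies to bowel-wall intensity profiles can be sketched for a 1-D profile. The helper below is a minimal illustration (linear interpolation at the half-maximum crossings), not the authors' implementation; the synthetic Gaussian profile is an assumption chosen so the result can be checked against the known Gaussian FWHM of about 2.355 sigma.

```python
import numpy as np

def full_width_half_max(profile):
    """Estimate the full-width at half-maximum of a 1-D intensity
    profile, interpolating linearly at the half-maximum crossings."""
    profile = np.asarray(profile, dtype=float)
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    left, right = above[0], above[-1]

    def cross(i0, i1):
        # Fractional index where the profile crosses `half`
        # between adjacent samples i0 and i1.
        y0, y1 = profile[i0], profile[i1]
        return i0 + (half - y0) / (y1 - y0) * (i1 - i0)

    x_left = cross(left - 1, left) if left > 0 else float(left)
    x_right = cross(right, right + 1) if right < len(profile) - 1 else float(right)
    return x_right - x_left

# Synthetic Gaussian profile; its FWHM should be close to 2.355 * sigma.
x = np.arange(100)
sigma = 6.0
profile = np.exp(-0.5 * ((x - 50) / sigma) ** 2)
print(round(full_width_half_max(profile), 2))
```

For the Gaussian above, the estimate lands within a few hundredths of the analytic value 2.355 * 6 = 14.13, since linear interpolation between samples is accurate when the profile is smooth.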
18.
Chinese Journal of Medical Instrumentation ; (6): 341-344, 2019.
Article in Chinese | WPRIM | ID: wpr-772490

ABSTRACT

OBJECTIVE: A method for dynamically collecting and processing ECG signals was designed to obtain classification information for abnormal ECG signals. METHODS: First, ECG eigenvectors were acquired by real-time acquisition of ECG signals combined with the discrete wavelet transform, and the ECG fuzzy information entropy was then calculated. Finally, the Euclidean distance was used to obtain the semantic distance between ECG signals, yielding the classification information for abnormal signals. RESULTS: The device could effectively identify abnormal ECG signals on an embedded Internet-of-Things platform and improved the diagnostic accuracy for heart diseases. CONCLUSIONS: The fuzzy ECG-signal diagnosis device could accurately classify abnormal signals and output an online signal classification matrix with a high confidence interval.


Subject(s)
Humans , Algorithms , Arrhythmias, Cardiac , Electrocardiography , Fuzzy Logic , Heart Diseases , Diagnosis , Internet , Signal Processing, Computer-Assisted , Wavelet Analysis
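The pipeline in this entry, a discrete wavelet transform followed by a Euclidean-distance comparison of feature vectors, can be sketched with a one-level Haar transform. The wavelet choice and the toy "beats" below are illustrative assumptions, not the authors' exact filters or features.

```python
import numpy as np

def haar_dwt(signal):
    """One-level discrete Haar wavelet transform: returns the
    approximation (low-pass) and detail (high-pass) coefficients."""
    x = np.asarray(signal, dtype=float)
    if len(x) % 2:                    # pad to even length
        x = np.append(x, x[-1])
    s2 = np.sqrt(2.0)
    approx = (x[0::2] + x[1::2]) / s2
    detail = (x[0::2] - x[1::2]) / s2
    return approx, detail

def euclidean(u, v):
    """Euclidean distance between two feature vectors."""
    return float(np.sqrt(np.sum((np.asarray(u) - np.asarray(v)) ** 2)))

# Toy beats: detail coefficients serve as a crude morphology feature.
normal = np.array([0, 1, 4, 1, 0, 0, 0, 0], dtype=float)
abnormal = np.array([0, 1, 2, 2, 2, 1, 0, 0], dtype=float)
_, d_normal = haar_dwt(normal)
_, d_abnormal = haar_dwt(abnormal)
print(euclidean(d_normal, d_abnormal))
```

Because the Haar transform is orthonormal, the total energy of the approximation and detail coefficients equals that of the input, which makes the distance between detail vectors a meaningful comparison of high-frequency morphology.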
19.
Biomedical Engineering Letters ; (4): 387-394, 2019.
Article in English | WPRIM | ID: wpr-785514

ABSTRACT

This paper presents a new class of local neighborhood based wavelet feature descriptor (LNWFD) for content based medical image retrieval (CBMIR). Effective image retrieval from large medical databases is a backbone of diagnosis. Existing wavelet transform based medical image retrieval methods suffer from long feature vectors and limited retrieval performance. The triplet half-band filter bank (THFB) enhances the properties of wavelet filters using three kernels, and this property is exploited in the proposed method. First, the THFB is used for single-level wavelet decomposition to obtain four sub-bands. Next, the relationship among wavelet coefficients is exploited at each sub-band using a 3 × 3 neighborhood window to form the LNWFD pattern. The novelty of the proposed descriptor lies in exploring the relation between wavelet transform values of pixels rather than intensity values, which gives more detailed local information in the wavelet sub-bands. Thus, the proposed feature descriptor is robust against illumination. The Manhattan distance is used to compute the similarity between the query feature vector and the feature vectors of the database. The proposed method is tested for medical image retrieval using the OASIS-MRI, NEMA-CT, and Emphysema-CT databases. The average retrieval precisions achieved are 71.45% and 99.51% on the OASIS-MRI and NEMA-CT databases, respectively, for the top ten matches, and 55.51% on the Emphysema-CT database for the top 50 matches. The experimental results confirm that the proposed method outperforms well-known existing descriptors.


Subject(s)
Humans , Diagnosis , Lighting , Methods , Residence Characteristics , Subject Headings , Triplets , Wavelet Analysis
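The retrieval step described above, ranking database feature vectors by Manhattan (L1) distance to a query vector, can be sketched as follows. The toy feature vectors are illustrative, not actual LNWFD descriptors.

```python
import numpy as np

def manhattan_rank(query, database):
    """Rank database feature vectors by Manhattan (L1) distance to the
    query; returns database indices ordered from best to worst match."""
    q = np.asarray(query, dtype=float)
    db = np.asarray(database, dtype=float)
    dists = np.abs(db - q).sum(axis=1)   # L1 distance per database row
    return np.argsort(dists)

# Toy feature vectors standing in for per-image descriptors.
database = [[0.1, 0.9, 0.3],
            [0.8, 0.2, 0.5],
            [0.1, 0.8, 0.45]]
query = [0.1, 0.85, 0.35]
print(manhattan_rank(query, database))   # indices, best match first
```

The L1 distance is a common choice in retrieval because it is cheap to compute and less dominated by a single large coordinate difference than the Euclidean distance.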
20.
West Indian med. j ; 67(3): 243-247, July-Sept. 2018. tab, graf
Article in English | LILACS | ID: biblio-1045851

ABSTRACT

ABSTRACT This paper presents an improved classification system for brain tumours using the wavelet transform and a neural network. The anisotropic diffusion filter was used for image denoising, and the performance of the oriented rician noise reducing anisotropic diffusion (ORNRAD) filter was validated. Segmentation of the denoised image was carried out by fuzzy c-means clustering. Features were extracted using symlet and coiflet wavelet transforms, and a Levenberg-Marquardt algorithm-based neural network was used to classify the magnetic resonance (MR) images. This MR image classification technique was tested against existing methods, and its performance was found to be satisfactory, with a classification accuracy of 93.24%. The developed system could assist physicians in classifying MR images for better decision-making.




Subject(s)
Humans , Brain Neoplasms/classification , Brain Neoplasms/diagnostic imaging , Wavelet Analysis , Nerve Net/diagnostic imaging , Magnetic Resonance Imaging
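The fuzzy c-means segmentation step used in this entry can be sketched on 1-D intensities. This is a minimal textbook FCM (alternating membership and centroid updates with fuzzifier m), not the authors' implementation, and the sample intensities are invented for illustration.

```python
import numpy as np

def fuzzy_c_means(data, n_clusters=2, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means on 1-D samples: alternates membership and
    centroid updates; returns (centroids, membership matrix)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(data, dtype=float).reshape(-1, 1)
    # Random initial memberships, each row normalized to sum to 1.
    u = rng.random((len(x), n_clusters))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        um = u ** m
        # Centroids: membership-weighted means of the samples.
        centers = (um.T @ x) / um.sum(axis=0)[:, None]
        # Memberships: inverse-distance weighting, then normalize.
        dist = np.abs(x - centers.T) + 1e-12
        u = 1.0 / dist ** (2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)
    return centers.ravel(), u

# Two well-separated intensity groups; centroids should land near them.
data = [0.05, 0.1, 0.15, 0.85, 0.9, 0.95]
centers, u = fuzzy_c_means(data)
print(np.sort(centers))
```

Unlike hard k-means, each sample keeps a graded membership in every cluster, which is why FCM is popular for segmenting MR images whose tissue boundaries are blurred by partial-volume effects.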